Development of a Practical Visual-Evoked Potential-Based Brain-Computer Interface
There are many different neuromuscular disorders that disrupt the normal communication pathways between the brain and the rest of the body. These diseases often leave patients in a "locked-in" state, rendering them unable to communicate with their environment despite having cognitively normal brain function. Brain-computer interfaces (BCIs) are augmentative communication devices that establish a direct link between the brain and a computer. Visual evoked potential (VEP)-based BCIs, which depend on the use of salient visual stimuli, are among the fastest BCIs available and provide the highest communication rates of any BCI modality. However, the majority of research focuses solely on improving raw BCI performance; as a result, most visual BCIs still suffer from a myriad of practical issues that make them impractical for everyday use. The focus of this dissertation is the development of novel advancements and solutions that increase the practicality of VEP-based BCIs. The presented work reports the results of several studies relating to characterizing and optimizing visual stimuli, improving ergonomic design, reducing visual irritation, and implementing a practical VEP-based BCI using an extensible software framework and mobile device platforms.
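The abstract does not name a specific decoding pipeline, but a common baseline for VEP-based BCIs is canonical correlation analysis (CCA) between multichannel EEG and reference signals at each stimulus's flicker frequency. Below is a minimal sketch assuming SSVEP-style flicker stimuli; the function names, channel counts, and frequencies are illustrative and not taken from the dissertation.

```python
# Hypothetical sketch: SSVEP target identification via CCA.
# The dissertation's actual decoding method is not specified in the
# abstract; CCA against sinusoidal references is a common baseline.
import numpy as np
from sklearn.cross_decomposition import CCA

def make_references(freq, fs, n_samples, n_harmonics=2):
    """Sine/cosine reference signals at a stimulus frequency and its harmonics."""
    t = np.arange(n_samples) / fs
    refs = []
    for h in range(1, n_harmonics + 1):
        refs.append(np.sin(2 * np.pi * h * freq * t))
        refs.append(np.cos(2 * np.pi * h * freq * t))
    return np.column_stack(refs)  # shape: (n_samples, 2 * n_harmonics)

def classify_epoch(eeg, stim_freqs, fs):
    """Pick the stimulus frequency whose references correlate best with the EEG.

    eeg: (n_samples, n_channels) single epoch.
    """
    scores = []
    for f in stim_freqs:
        refs = make_references(f, fs, eeg.shape[0])
        cca = CCA(n_components=1)
        x_c, y_c = cca.fit_transform(eeg, refs)
        scores.append(np.corrcoef(x_c[:, 0], y_c[:, 0])[0, 1])
    return stim_freqs[int(np.argmax(scores))]

# Usage: 2 s of 8-channel EEG at 250 Hz, four flicker targets.
rng = np.random.default_rng(0)
epoch = rng.standard_normal((500, 8))  # placeholder data, not real EEG
print(classify_epoch(epoch, stim_freqs=[8.0, 10.0, 12.0, 15.0], fs=250))
```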
Efficiently Combining Human Demonstrations and Interventions for Safe Training of Autonomous Systems in Real-Time
This paper investigates how to utilize different forms of human interaction to safely train autonomous systems in real time by learning from both human demonstrations and interventions. We implement two components of the Cycle-of-Learning for Autonomous Systems, our framework for combining multiple modalities of human interaction. The current effort employs human demonstrations to teach a desired behavior via imitation learning, then leverages intervention data to correct undesired behaviors produced by the imitation learner, teaching novel tasks to an autonomous agent safely after only minutes of training. We demonstrate this method on an autonomous perching task using a quadrotor with continuous roll, pitch, yaw, and throttle commands and imagery captured from a downward-facing camera in a high-fidelity simulated environment. Our method improves task completion performance for the same amount of human interaction when compared to learning from demonstrations alone, while also requiring on average 32% less data to achieve that performance. This provides evidence that combining multiple modes of human interaction can increase both the training speed and overall performance of policies for autonomous systems.